Graphic layout designs play an essential role in visual communication. Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production. Although generative models have emerged to make design automation no longer utopian, it remains non-trivial to customize designs that comply with designers' multimodal desires, i.e., constrained by background images and driven by foreground contents. In this study, we propose \textit{LayoutDETR}, which inherits the high quality and realism of generative modeling while reformulating the content-aware requirements as a detection problem: we learn to detect, in a background image, the reasonable locations, scales, and spatial relations for multimodal elements in a layout. Experiments validate that our solution yields new state-of-the-art performance for layout generation on public benchmarks and on our newly curated ads banner dataset. For practical usage, we build our solution into a graphical system that facilitates user studies. We demonstrate that our designs attract significantly more subjective preference than baselines. Our code, models, dataset, graphical system, and demos are available at https://github.com/salesforce/LayoutDETR.
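The abstract gives no implementation details, so the following is only a rough PyTorch sketch of what a detection-style layout generator could look like: learned element queries attend to background-image features through a DETR-like decoder and are decoded into normalized boxes and element-type logits. All names, dimensions, and the query formulation are assumptions, not the released LayoutDETR code.

```python
# Hypothetical DETR-style layout decoder sketch (not the authors' code).
# Learned element queries attend to background-image tokens and are decoded into
# normalized boxes (cx, cy, w, h) and element-type logits.
import torch
import torch.nn as nn

class LayoutDecoderSketch(nn.Module):
    def __init__(self, d_model=256, num_queries=16, num_types=5):
        super().__init__()
        self.queries = nn.Embedding(num_queries, d_model)      # one query per candidate element
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.box_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                      nn.Linear(d_model, 4))   # (cx, cy, w, h) in [0, 1]
        self.type_head = nn.Linear(d_model, num_types)         # e.g. header / body / button / ...

    def forward(self, bg_features):
        # bg_features: (B, HW, d_model) tokens from a background-image encoder
        q = self.queries.weight.unsqueeze(0).expand(bg_features.size(0), -1, -1)
        h = self.decoder(q, bg_features)
        return torch.sigmoid(self.box_head(h)), self.type_head(h)

feats = torch.randn(2, 196, 256)                                # dummy 14x14 background tokens
boxes, types = LayoutDecoderSketch()(feats)
print(boxes.shape, types.shape)                                 # (2, 16, 4) (2, 16, 5)
```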
The understanding capabilities of current state-of-the-art 3D models are limited by datasets with a small number of annotated data and a pre-defined set of categories. In the 2D counterpart, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for the 3D modality could be promising to improve 3D understanding under the restricted data regime, but this line of research is not well studied. Therefore, we introduce ULIP to learn a unified representation of image, text, and 3D point cloud by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training on massive image-text pairs. Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to 3D backbone networks and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% in top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models will be released.
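As a hedged illustration of how a 3D encoder could be aligned with a frozen image-text space using triplets, the sketch below applies a CLIP-style symmetric contrastive loss between point-cloud embeddings and frozen image/text embeddings. The function names, embedding dimension, and temperature are assumptions rather than details taken from the ULIP paper.

```python
# Minimal sketch of aligning a 3D encoder with a frozen image-text space via a
# CLIP-style contrastive loss over (point cloud, image, text) triplets.
# Hypothetical names and dimensions; not the released ULIP code.
import torch
import torch.nn.functional as F

def contrastive(a, b, temperature=0.07):
    # symmetric InfoNCE between two batches of L2-normalized embeddings
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    labels = torch.arange(a.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def ulip_style_loss(pc_embed, img_embed_frozen, txt_embed_frozen):
    # Only the 3D encoder producing pc_embed is trained; image/text embeddings
    # come from a frozen pre-trained vision-language model.
    return contrastive(pc_embed, img_embed_frozen) + contrastive(pc_embed, txt_embed_frozen)

pc  = torch.randn(8, 512, requires_grad=True)   # 3D backbone output (trainable)
img = torch.randn(8, 512)                        # frozen image embeddings
txt = torch.randn(8, 512)                        # frozen text embeddings
print(ulip_style_loss(pc, img, txt).item())
```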
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that can replace the standard mobile ISPs and run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and can process Full HD photos in less than 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
Facial expression is an essential factor in conveying human emotional states and intentions. Although remarkable progress has been made on facial expression recognition (FER) tasks, challenges due to the large variations of expression patterns and unavoidable data uncertainties still remain. In this paper, we propose mid-level representation enhancement (MRE) and graph-embedded uncertainty suppressing (GUS) to address these issues. On one hand, MRE is introduced to prevent expression representation learning from being dominated by a limited number of highly discriminative patterns. On the other hand, GUS is introduced to suppress feature ambiguity in the representation space. The proposed method not only has stronger generalization capability to handle different variations of expression patterns, but is also more robust in capturing expression representations. Experimental evaluation on Aff-Wild2 has verified the effectiveness of the proposed method.
Since data scarcity and data heterogeneity are prevalent in medical imaging, well-trained convolutional neural networks (CNNs) using previous normalization methods may perform poorly when deployed to a new site. However, a reliable model for real-world applications should be able to generalize well on both in-distribution (IND) and out-of-distribution (OOD) data (e.g., data from new sites). In this study, we present a novel normalization technique called window normalization (WIN), a simple yet effective alternative to existing normalization methods. Specifically, WIN computes its normalization statistics from local statistics over windows of features. This feature-level augmentation technique regularizes the model well and significantly improves its OOD generalization. Taking advantage of this, we propose a novel self-distillation method called Win-Win to further improve OOD generalization in classification. Win-Win can be easily implemented with two forward passes and a consistency constraint, serving as a simple extension to existing methods. Extensive experimental results on various tasks (such as glaucoma detection, breast cancer detection, chromosome classification, optic disc and cup segmentation, etc.) and datasets (26 datasets) demonstrate the generality and effectiveness of our method. The code is available at https://github.com/joe1chief/windownormalizaion.
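The abstract states that WIN computes normalization statistics over windows of features; one plausible reading is sketched below, where the mean and variance are taken from a random spatial window of the feature map. The window-sampling scheme and train/test behavior are guesses, not the authors' exact formulation.

```python
# Rough sketch of window-based normalization: statistics come from a random spatial
# window of the feature map instead of the full map. Details (window sizing,
# train/test behavior) are assumptions, not the paper's exact method.
import torch

def window_norm(x, win_frac=0.5, eps=1e-5):
    # x: (B, C, H, W) feature map
    B, C, H, W = x.shape
    wh, ww = max(1, int(H * win_frac)), max(1, int(W * win_frac))
    top = torch.randint(0, H - wh + 1, (1,)).item()
    left = torch.randint(0, W - ww + 1, (1,)).item()
    window = x[:, :, top:top + wh, left:left + ww]
    mu = window.mean(dim=(2, 3), keepdim=True)        # per-sample, per-channel window stats
    var = window.var(dim=(2, 3), keepdim=True, unbiased=False)
    return (x - mu) / torch.sqrt(var + eps)

feat = torch.randn(4, 64, 32, 32)
print(window_norm(feat).shape)                         # torch.Size([4, 64, 32, 32])
```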
Volumetric reconstruction of the fetal brain from multiple MR slices, acquired in the presence of almost unpredictable and often severe subject motion, is a challenging task that is highly sensitive to the initialization of slice transformations. We propose a novel slice-to-volume registration method using a transformer trained on synthetically transformed data, which models multiple stacks of MR slices as a sequence. With the attention mechanism, our model automatically detects the correlations between slices and predicts the transformation of one slice using information from other slices. We also estimate the underlying 3D volume to assist slice-to-volume registration, and alternately update the volume and the transformations to improve accuracy. Results on synthetic data show that our method achieves lower registration error and better reconstruction quality compared with existing state-of-the-art methods. Experiments with real-world MRI data are also performed to demonstrate the ability of the proposed model to improve the quality of 3D reconstruction under severe fetal motion.
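For intuition only, here is a toy sketch of the slices-as-a-sequence idea: each 2D slice is embedded as a token, a transformer encoder lets slices attend to one another, and a head predicts a rigid transform per slice. The slice encoder, dimensions, and the 6-parameter output are assumptions, not the paper's architecture.

```python
# Toy sketch of transformer-based slice-to-volume registration: each 2D slice becomes
# a token, self-attention shares information across slices, and a head predicts a
# rigid transform (3 rotation + 3 translation parameters) per slice.
# Hypothetical dimensions; not the authors' implementation.
import torch
import torch.nn as nn

class SliceRegistrationSketch(nn.Module):
    def __init__(self, slice_hw=64, d_model=256):
        super().__init__()
        self.embed = nn.Linear(slice_hw * slice_hw, d_model)   # flatten-and-project slice encoder
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.transform_head = nn.Linear(d_model, 6)            # (rx, ry, rz, tx, ty, tz) per slice

    def forward(self, slices):
        # slices: (B, N, H, W) acquired 2D slices treated as a sequence
        B, N, H, W = slices.shape
        tokens = self.embed(slices.reshape(B, N, H * W))
        tokens = self.encoder(tokens)                          # slices attend to each other
        return self.transform_head(tokens)                     # (B, N, 6) rigid parameters

stack = torch.randn(2, 24, 64, 64)
print(SliceRegistrationSketch()(stack).shape)                  # torch.Size([2, 24, 6])
```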
Despite great progress in object detection, most existing methods are limited to a small set of object categories due to the considerable human effort required for instance-level bounding-box annotation. To alleviate this problem, recent open-vocabulary and zero-shot detection methods attempt to detect object categories not seen during training. However, these methods still rely on manually provided bounding-box annotations for a set of base classes. We propose an open-vocabulary detection framework that can be trained without manually provided bounding-box annotations. Our method achieves this by leveraging the localization ability of pre-trained vision-language models to generate pseudo bounding-box labels that can be directly used to train object detectors. Experimental results on COCO, PASCAL VOC, Objects365, and LVIS demonstrate the effectiveness of our method. Specifically, our method outperforms the state of the art (SOTA) trained with human-annotated bounding boxes by 3% AP on COCO novel categories, even though our training source is not equipped with manual bounding-box labels. When manual bounding-box labels are further utilized, our method outperforms these baselines by 8% AP.
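One simple way to turn a vision-language model's localization signal into a pseudo bounding box is to threshold a per-category activation map and take the tight box around the activated region, as sketched below. How the activation map is produced (e.g., Grad-CAM on the vision-language model) and the thresholding rule are abstracted away here and are not taken from the paper.

```python
# Toy sketch: derive a pseudo bounding box from a per-category activation map by
# thresholding and taking the tight box around the activated region.
# Not the authors' pipeline; the activation map is assumed given.
import numpy as np

def pseudo_box_from_activation(act_map, thresh_ratio=0.5):
    # act_map: (H, W) activation/similarity scores for one category in one image
    mask = act_map >= thresh_ratio * act_map.max()
    ys, xs = np.where(mask)
    if len(xs) == 0:
        return None                                      # category likely absent
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())  # (x1, y1, x2, y2)

rng = np.random.default_rng(0)
act = rng.random((32, 32))
act[10:20, 12:25] += 2.0                                  # pretend the model highlights this region
print(pseudo_box_from_activation(act))                    # (12, 10, 24, 19)
```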
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range of tasks, domains, data regimes, and languages, both high- and low-resource. On the CoVoST-2 speech translation benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice, and VoxPopuli, lowering error rates by 14-34% relative. XLS-R also sets a new state of the art on VoxLingua107 language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform English-only pretraining when translating English speech into other languages, a setting which favors monolingual pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
Mobile network traffic forecasting is one of the key functions in daily network operation. Commercial mobile networks are large, heterogeneous, complex, and dynamic. These intrinsic features make mobile network traffic forecasting far from being solved, even with recent advanced algorithms such as graph convolutional network-based prediction approaches and various attention mechanisms, which have already been proven successful in vehicle traffic forecasting. In this paper, we cast the problem as a spatial-temporal sequence prediction task. We propose a novel deep learning network architecture, Adaptive Multi-receptive Field Spatial-Temporal Graph Convolutional Network (AMF-STGCN), to model the traffic dynamics of mobile base stations. AMF-STGCN extends GCN by (1) jointly modeling the complex spatial-temporal dependencies in mobile networks, (2) applying attention mechanisms to capture the various receptive fields of heterogeneous base stations, and (3) introducing an extra decoder based on a fully connected deep network to conquer the error propagation challenge in multi-step prediction. Experiments on four real-world datasets from two different domains consistently show that AMF-STGCN outperforms the state-of-the-art methods.
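For orientation, the sketch below shows a single generic spatial-temporal graph-convolution block: a graph convolution aggregates traffic across neighboring base stations and a 1D convolution mixes information along the time axis. The adaptive multi-receptive-field attention and the extra decoder described in the abstract are omitted; all names and shapes are illustrative.

```python
# Minimal sketch of one spatial-temporal graph-convolution block: a graph convolution
# mixes traffic across neighboring base stations, then a 1D convolution mixes along
# the time axis. AMF-STGCN's adaptive multi-receptive-field attention is omitted.
import torch
import torch.nn as nn

class STBlockSketch(nn.Module):
    def __init__(self, in_feats, out_feats, kernel_t=3):
        super().__init__()
        self.graph_proj = nn.Linear(in_feats, out_feats)
        self.temporal = nn.Conv1d(out_feats, out_feats, kernel_t, padding=kernel_t // 2)

    def forward(self, x, adj_norm):
        # x: (B, T, N, F) traffic features; adj_norm: (N, N) normalized base-station adjacency
        h = torch.einsum("nm,btmf->btnf", adj_norm, x)     # spatial aggregation over neighbors
        h = torch.relu(self.graph_proj(h))
        B, T, N, F_ = h.shape
        h = h.permute(0, 2, 3, 1).reshape(B * N, F_, T)    # fold nodes into batch for temporal conv
        h = torch.relu(self.temporal(h))
        return h.reshape(B, N, F_, T).permute(0, 3, 1, 2)  # back to (B, T, N, F)

x = torch.randn(2, 12, 50, 8)                              # 12 time steps, 50 base stations, 8 features
adj = torch.softmax(torch.randn(50, 50), dim=-1)           # stand-in for a normalized adjacency
print(STBlockSketch(8, 16)(x, adj).shape)                   # torch.Size([2, 12, 50, 16])
```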
Robotic exploration in uncertain environments is challenging when no optical information is available. In this paper, we propose an autonomous solution for exploring an unknown task space based on tactile sensing alone. We first design a whisker sensor based on MEMS barometer devices. This sensor can acquire contact information by interacting with the environment non-intrusively. The sensor is accompanied by a planning technique that generates exploration trajectories using tactile perception. The technique relies on a hybrid policy for tactile exploration, which includes a proactive informative path planner for object searching and a reactive Hopf oscillator for contour tracing. Results show that the hybrid exploration policy can increase the efficiency of object discovery. Finally, scene understanding is facilitated by object segmentation and classification. A classifier is developed to recognize object categories based on the geometric features collected by the whisker sensor. This approach demonstrates that the whisker sensor, together with tactile intelligence, can provide sufficiently discriminative features to distinguish objects.
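The reactive Hopf oscillator mentioned for contour tracing has well-known dynamics: trajectories converge to a stable limit cycle of radius sqrt(mu), which makes it a convenient building block for rhythmic tracing motions. The snippet below simply integrates these textbook dynamics with Euler steps; how the paper couples the oscillator to whisker feedback is not specified in the abstract.

```python
# Textbook Hopf oscillator: dx/dt = (mu - r^2) x - omega y, dy/dt = (mu - r^2) y + omega x.
# Trajectories settle onto a limit cycle of radius sqrt(mu). Plain Euler integration;
# the coupling to whisker feedback used in the paper is not modeled here.
import numpy as np

def hopf_step(x, y, mu=1.0, omega=1.0, dt=1e-3):
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

x, y = 0.1, 0.0                                   # start near the origin
for _ in range(20000):                            # 20 s of simulated time
    x, y = hopf_step(x, y)
print(round(np.hypot(x, y), 3))                   # ~1.0, the limit-cycle radius sqrt(mu)
```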